Gradient Boost

Before moving forward with the to-do list, let's throw a gradient boosting model at it.

Gradient boost

For many reasons, Random Forest is usually a very good baseline model. In this particular case I started with a polynomial OLS as the baseline, simply because the correlations made it evident that the relationship between temperature and consumption follows a polynomial shape. But now let's move on to a beloved tree ensemble: gradient boosting.
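To make the rest of the section concrete, here is a minimal sketch of the vanilla pipeline whose details are shown under "Model details" further down: a column selector followed by a GradientBoostingRegressor with default settings. The post's own ColumnSelector implementation isn't shown in this section, so a bare-bones stand-in is defined here, and X_train / y_train are assumed to be the feature and consumption data prepared earlier.

```python
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.pipeline import Pipeline


class ColumnSelector(BaseEstimator, TransformerMixin):
    """Minimal stand-in: keep only the listed columns of a DataFrame."""

    def __init__(self, columns=None):
        self.columns = columns

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.columns]


# Vanilla pipeline, mirroring the model details shown further down:
# a column selector followed by a GradientBoostingRegressor with defaults.
gb_pipe = Pipeline(steps=[
    ("vars", ColumnSelector(columns=["tt_tu_mean", "rf_tu_mean", "td_mean",
                                     "vp_std_mean", "tf_std_mean"])),
    ("model", GradientBoostingRegressor(random_state=7)),
])

gb_pipe.fit(X_train, y_train)  # X_train / y_train: data prepared earlier in the post
```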

Model Cards provide a framework for transparent, responsible reporting. Use the vetiver `.qmd` Quarto template as a place to start, with vetiver.model_card().
Writing pin:
Name: 'wd-gb'
Version: 20241227T102939Z-776ff
<vetiver.vetiver_model.VetiverModel at 0x7f0c08205060>
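A rough sketch of how that pin was written; only the pin name 'wd-gb' comes from the output above, while the board location and the prototype data are assumptions.

```python
import vetiver
from pins import board_folder
from vetiver import VetiverModel, vetiver_pin_write

# Hypothetical local board; the actual board used in the post isn't shown here.
board = board_folder("./models", allow_pickle_read=True)

# Wrap the fitted pipeline and pin it; this is what prints the
# "Writing pin: Name: 'wd-gb'" message above.
v = VetiverModel(gb_pipe, model_name="wd-gb", prototype_data=X_train.head(1))
vetiver_pin_write(board, v)

# Copy vetiver's Quarto model-card template into the working directory.
vetiver.model_card()
```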

Metrics

| Metric | Single Split: train | Single Split: test | CV: test | CV: train |
|---|---|---|---|---|
| MAE - Mean Absolute Error | 1.346099 | 1.931561 | 2.018334 | 1.248131 |
| MSE - Mean Squared Error | 3.446144 | 14.335157 | 9.394741 | 2.861698 |
| RMSE - Root Mean Squared Error | 1.856379 | 3.786180 | 2.761768 | 1.691422 |
| R2 - Coefficient of Determination | 0.963260 | 0.817414 | -1.421431 | 0.970729 |
| MAPE - Mean Absolute Percentage Error | 0.126489 | 0.188744 | 0.326787 | 0.104960 |
| EVS - Explained Variance Score | 0.963260 | 0.825083 | -0.479618 | 0.970729 |
| MeAE - Median Absolute Error | 0.975683 | 1.321576 | 1.499568 | 0.950178 |
| D2 - D2 Absolute Error Score | 0.810156 | 0.689225 | -0.374103 | 0.822316 |
| Pinball - Mean Pinball Loss | 0.673049 | 0.965780 | 1.009167 | 0.624066 |
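For reference, a sketch of how a metrics table like this can be assembled with scikit-learn. The metric set mirrors the rows above; X_train / X_test and y_train / y_test are the single split assumed from earlier in the post, and the CV columns would come from repeating the same computation over the cross-validation folds.

```python
import pandas as pd
from sklearn import metrics


def regression_report(y_true, y_pred):
    """Compute the metrics reported in the table above for one data split."""
    return pd.Series({
        "MAE":     metrics.mean_absolute_error(y_true, y_pred),
        "MSE":     metrics.mean_squared_error(y_true, y_pred),
        "RMSE":    metrics.mean_squared_error(y_true, y_pred) ** 0.5,
        "R2":      metrics.r2_score(y_true, y_pred),
        "MAPE":    metrics.mean_absolute_percentage_error(y_true, y_pred),
        "EVS":     metrics.explained_variance_score(y_true, y_pred),
        "MeAE":    metrics.median_absolute_error(y_true, y_pred),
        "D2":      metrics.d2_absolute_error_score(y_true, y_pred),
        "Pinball": metrics.mean_pinball_loss(y_true, y_pred),
    })


# Single-split columns of the table.
report = pd.concat(
    {
        "train": regression_report(y_train, gb_pipe.predict(X_train)),
        "test":  regression_report(y_test, gb_pipe.predict(X_test)),
    },
    axis=1,
)
```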

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit; see the sketch after the plot list below.

  • white noise, or is there a pattern?
  • heteroscedasticity?
  • non-linearity?

Normality of Residuals:

  • Are the residuals normally distributed?

Leverage

Scale-Location plot

Residuals Autocorrelation Plot

Residuals vs Time
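A minimal sketch of the residual diagnostics behind these plots, assuming the fitted pipeline from above and a time-indexed y_test Series.

```python
import matplotlib.pyplot as plt
import scipy.stats as stats
from statsmodels.graphics.tsaplots import plot_acf

resid = y_test - gb_pipe.predict(X_test)

fig, axes = plt.subplots(2, 2, figsize=(10, 8))

# Residuals vs. predicted: look for patterns, heteroscedasticity, non-linearity.
axes[0, 0].scatter(gb_pipe.predict(X_test), resid, s=10)
axes[0, 0].axhline(0, color="grey")
axes[0, 0].set(xlabel="Predicted", ylabel="Residual", title="Residuals vs. Predicted")

# Q-Q plot: check whether residuals are roughly normal.
stats.probplot(resid, dist="norm", plot=axes[0, 1])

# Autocorrelation of residuals: leftover time structure shows up here.
plot_acf(resid, ax=axes[1, 0])

# Residuals over time: drifts or seasonality the model missed.
axes[1, 1].plot(resid.index, resid.values)
axes[1, 1].set(xlabel="Time", ylabel="Residual", title="Residuals vs. Time")

plt.tight_layout()
```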

Again, the model overfits a lot.

CV results per tuned parameter (one plot each): param_model__learning_rate, param_model__max_depth, param_model__min_samples_leaf, param_model__min_samples_split, param_model__n_estimators, param_model__subsample, param_vars__columns.
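These per-parameter views come straight out of a search's cv_results_, which carries one param_<name> column per tuned hyperparameter. Below is a sketch of a search consistent with those parameter names; the candidate grids, the scorer, and the fold scheme are illustrative assumptions, not the post's actual search space.

```python
from sklearn.model_selection import GridSearchCV

# Illustrative grids only; parameter names match the plots above,
# the candidate values are assumptions.
param_grid = {
    "model__learning_rate": [0.01, 0.05, 0.1],
    "model__max_depth": [2, 3, 5],
    "model__min_samples_leaf": [1, 5, 10],
    "model__min_samples_split": [2, 16, 48],
    "model__n_estimators": [60, 100, 200],
    "model__subsample": [0.7, 1.0],
    # The column selector is tuned as well: which feature subset to use.
    "vars__columns": [
        ["tt_tu_mean", "rf_tu_mean", "td_mean", "vp_std_mean", "tf_std_mean"],
        ["rf_tu_mean", "vp_std_mean"],
    ],
}

search = GridSearchCV(
    gb_pipe,
    param_grid,
    scoring="neg_mean_absolute_error",  # scorer and CV scheme: assumptions
    cv=5,
    n_jobs=-1,
)
search.fit(X_train, y_train)

# cv_results_ holds the 'param_model__learning_rate', 'param_vars__columns', ...
# columns that plots like the ones above are built from.
print(search.best_params_)
```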

Best model

{'model__learning_rate': 0.1,
 'model__max_depth': 5,
 'model__min_samples_leaf': 5,
 'model__min_samples_split': 48,
 'model__n_estimators': 60,
 'model__subsample': 1,
 'vars__columns': ['rf_tu_mean', 'vp_std_mean']}
Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

Metrics

| Metric | Single Split: train | Single Split: test | CV: test | CV: train |
|---|---|---|---|---|
| MAE - Mean Absolute Error | 1.509207 | 1.952603 | 2.116290 | 1.521850 |
| MSE - Mean Squared Error | 4.942120 | 15.258908 | 7.813904 | 4.990028 |
| RMSE - Root Mean Squared Error | 2.223088 | 3.906265 | 2.662994 | 2.230651 |
| R2 - Coefficient of Determination | 0.947312 | 0.805648 | -1.472599 | 0.948840 |
| MAPE - Mean Absolute Percentage Error | 0.134463 | 0.191944 | 0.365520 | 0.117866 |
| EVS - Explained Variance Score | 0.947312 | 0.815555 | -0.282647 | 0.948840 |
| MeAE - Median Absolute Error | 0.987693 | 1.204841 | 1.745975 | 1.038812 |
| D2 - D2 Absolute Error Score | 0.787152 | 0.685840 | -0.502203 | 0.783162 |
| Pinball - Mean Pinball Loss | 0.754603 | 0.976302 | 1.058145 | 0.760925 |

Scatter plot matrix

Observed vs. Predicted and Residuals vs. Predicted

Check the residuals to assess the goodness of fit:

  • white noise, or is there a pattern?
  • heteroscedasticity?
  • non-linearity?

Normality of Residuals:

  • Are the residuals normally distributed?

Leverage

Scale-Location plot

Residuals Autocorrelation Plot

Residuals vs Time

Compare vanilla vs. tuned

Metrics

Single split

Metrics based on the test set of the single split
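One way to build that comparison, reusing the regression_report helper and the fitted pipelines sketched earlier (those names come from the sketches above, not from the post's own code):

```python
import pandas as pd

# Vanilla vs. tuned gradient boosting on the held-out test set of the single split.
single_split_comparison = pd.DataFrame({
    "vanilla": regression_report(y_test, gb_pipe.predict(X_test)),
    "tuned":   regression_report(y_test, search.best_estimator_.predict(X_test)),
})
print(single_split_comparison)
```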

Cross validation
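For the cross-validation view, a sketch using cross_validate on both pipelines; the scorer choice and fold scheme are assumptions.

```python
import pandas as pd
from sklearn.model_selection import cross_validate

scoring = {"MAE": "neg_mean_absolute_error",
           "RMSE": "neg_root_mean_squared_error",
           "R2": "r2"}


def cv_summary(pipe):
    """Mean CV test scores for one pipeline, with error signs flipped back."""
    cv = cross_validate(pipe, X_train, y_train, cv=5, scoring=scoring)
    out = {name: cv[f"test_{name}"].mean() for name in scoring}
    out["MAE"], out["RMSE"] = -out["MAE"], -out["RMSE"]
    return pd.Series(out)


cv_comparison = pd.DataFrame({
    "vanilla": cv_summary(gb_pipe),
    "tuned":   cv_summary(search.best_estimator_),
})
print(cv_comparison)
```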

Predictions, residuals, observed


Time vs. Predicted and Observed

Time vs. Residuals

Model details

Pipeline(steps=[('vars',
                 ColumnSelector(columns=['tt_tu_mean', 'rf_tu_mean', 'td_mean',
                                         'vp_std_mean', 'tf_std_mean'])),
                ('model', GradientBoostingRegressor(random_state=7))])
Pipeline(steps=[('vars', ColumnSelector(columns=['rf_tu_mean', 'vp_std_mean'])),
                ('model',
                 GradientBoostingRegressor(max_depth=5, min_samples_leaf=5,
                                           min_samples_split=48,
                                           n_estimators=60, random_state=7,
                                           subsample=1))])

TODOs